Conversation

@faran928
Contributor

Summary: torchrec uneven sharding changes

Differential Revision: D79603009

@meta-codesync
Contributor

meta-codesync bot commented Nov 10, 2025

@faran928 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D79603009.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Nov 10, 2025
faran928 added a commit to faran928/torchrec that referenced this pull request Nov 11, 2025
… pool (meta-pytorch#3533)

Summary:

A few changes in the diff:
1. Support proportionally sharding the tensor pool based on per-rank memory capacity.
2. Use block_bucketize_sparse_features_inference to return a bucket_mapping that can be used during request batching in inference with the custom Sigrid predictor engine.
3. Wrap some operations with fx wrappers so they are compatible with model split boundaries for DLRM serving, where embeddings are sharded and split across different PyTorch modules.
4. Expose a set_device() API on some modules so that some shards can be placed on CPU while others are placed on CUDA.
5. Move _get_unbucketize_tensor_via_length_alignment to common util files.
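
Item 1 (proportional sharding by memory capacity) can be sketched in plain Python; the helper name and signature below are illustrative, not the actual torchrec API:

```python
# Illustrative sketch (not the actual torchrec API): split `total_rows`
# of a tensor pool across ranks in proportion to each rank's memory
# capacity. Floor division can leave a few rows unassigned, so leftover
# rows go to the largest-capacity ranks until the sizes sum exactly.
def proportional_shard_sizes(total_rows: int, capacities: list[int]) -> list[int]:
    total_capacity = sum(capacities)
    sizes = [total_rows * c // total_capacity for c in capacities]
    leftover = total_rows - sum(sizes)
    # Hand the remainder to the ranks with the most capacity.
    by_capacity = sorted(range(len(capacities)), key=lambda r: -capacities[r])
    for rank in by_capacity[:leftover]:
        sizes[rank] += 1
    return sizes
```

For example, 10 rows over ranks with capacities [1, 1, 2] yields shard sizes [2, 2, 6].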

Differential Revision: D79603009
@faran928 faran928 force-pushed the export-D79603009 branch 2 times, most recently from 4e20f9e to bb2739b on November 12, 2025 15:50
faran928 added a commit to faran928/torchrec that referenced this pull request Nov 12, 2025
… pool (meta-pytorch#3533)

Summary:

A few changes in the diff:
1. Support proportionally sharding the tensor pool based on per-rank memory capacity.
2. Use block_bucketize_sparse_features_inference to return a bucket_mapping that can be used during request batching in inference with the custom Sigrid predictor engine.
3. Wrap some operations with fx wrappers so they are compatible with model split boundaries for DLRM serving, where embeddings are sharded and split across different PyTorch modules.
4. Expose a set_device() API on some modules so that some shards can be placed on CPU while others are placed on CUDA.
5. Move _get_unbucketize_tensor_via_length_alignment to common util files.

As part of this diff, some test cases also had to be updated: the small change to the forward path reorders the return values of the remote module in the tests, which in turn reorders the batch info for each of those outputs.
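
Item 2 relies on the FBGEMM op block_bucketize_sparse_features_inference; a pure-Python analogue of the bucket_mapping it returns (names and signature here are illustrative) looks like:

```python
# Illustrative pure-Python analogue of block bucketization (the real
# work is done by the FBGEMM op block_bucketize_sparse_features_inference):
# route each feature id to the rank owning its contiguous id block, and
# record a per-id bucket_mapping so responses can be un-bucketized back
# into the original request order during request batching.
def block_bucketize(ids: list[int], block_size: int, num_buckets: int):
    buckets = [[] for _ in range(num_buckets)]
    bucket_mapping = []
    for i in ids:
        b = min(i // block_size, num_buckets - 1)
        buckets[b].append(i)
        bucket_mapping.append(b)
    return buckets, bucket_mapping
```

With block_size=4 and 2 buckets, ids [0, 5, 3, 9] bucketize to [[0, 3], [5, 9]] with bucket_mapping [0, 1, 0, 1].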

Baseline test
Full Output: https://www.internalfb.com/intern/everpaste/?handle=GMpN0B-VwcKcvAgDAB_hIkOvKrJNbsIXAAAB&phabricator_paste_number=2035160680
Remote graph: https://www.internalfb.com/intern/everpaste/?color=0&handle=GDe4BCBV-XwTNLoEAOTnVbCneb1jbr0LAAAz
Output order: (_item_embedding_index_values_tensor_pool__local_shard_pools_0, _item_embedding_index_values_tensor_pool__local_shard_pools_1, getitem_6, getitem_10, getitem_9)

After changes
Full Output: https://www.internalfb.com/intern/everpaste/?handle=GIb_pSL6TjHawBgEALDgarUck4YhbsIXAAAB&phabricator_paste_number=2035191658 
Remote graph: https://www.internalfb.com/intern/everpaste/?color=0&handle=GFmFkB9waX3elzMGAJr9zTcLZiIDbr0LAAAz
Output order: (getitem_6, _item_embedding_index_values_tensor_pool__local_shard_pools_0, _item_embedding_index_values_tensor_pool__local_shard_pools_1, getitem_10, getitem_9)

After the changes, getitem_6 moves to the front of the output order.

Differential Revision: D79603009